cell assembly


Text-to-Battery Recipe: A language modeling-based protocol for automatic battery recipe extraction and retrieval

Lee, Daeun, Choi, Jaewoong, Mizuseki, Hiroshi, Lee, Byungju

arXiv.org Artificial Intelligence

Recent studies have increasingly applied natural language processing (NLP) to automatically extract experimental research data from the extensive battery materials literature. Despite the complex process involved in battery manufacturing -- from material synthesis to cell assembly -- there has been no comprehensive study systematically organizing this information. In response, we propose a language modeling-based protocol, Text-to-Battery Recipe (T2BR), for the automatic extraction of end-to-end battery recipes, validated using a case study on batteries containing LiFePO4 cathode material. We report machine learning-based paper-filtering models that screen 2,174 relevant papers from keyword-based search results, and unsupervised topic models that identify 2,876 paragraphs related to cathode synthesis and 2,958 paragraphs related to cell assembly. Then, focusing on these two topics, two deep learning-based named entity recognition models are developed to extract a total of 30 entities -- including precursors, active materials, and synthesis methods -- achieving F1 scores of 88.18% and 94.61%. The accurate extraction of entities enables the systematic generation of 165 end-to-end recipes for LiFePO4 batteries. Our protocol and results offer valuable insights into specific trends, such as associations between precursor materials and synthesis methods, or combinations of different precursor materials. We anticipate that our findings will serve as a foundational knowledge base for facilitating battery-recipe information retrieval. The proposed protocol will significantly accelerate the review of battery materials literature and catalyze innovations in battery design and development.


A probabilistic latent variable model for detecting structure in binary data

Warner, Christopher, Ruda, Kiersten, Sommer, Friedrich T.

arXiv.org Machine Learning

We introduce a novel, probabilistic binary latent variable model to detect noisy or approximate repeats of patterns in sparse binary data. The model is based on the "Noisy-OR model" (Heckerman, 1990), used previously for disease and topic modelling. The model's capability is demonstrated by extracting structure in recordings from retinal neurons, but it can be widely applied to discover and model latent structure in noisy binary data. In the context of spiking neural data, the task is to "explain" the spikes of individual neurons in terms of groups of neurons, "Cell Assemblies" (CAs), that often fire together due to mutual interactions or other causes. The model infers sparse activity in a set of binary latent variables, each describing the activity of a cell assembly. When the latent variable of a cell assembly is active, it reduces the probability that neurons belonging to this assembly are inactive. The conditional probability kernels of the latent components are learned from the data in an expectation-maximization scheme, involving inference of latent states and parameter adjustments to the model. We thoroughly validate the model on synthesized spike trains constructed to statistically resemble recorded retinal responses to white-noise and natural-movie stimuli. We also apply our model to spiking responses recorded in retinal ganglion cells (RGCs) during stimulation with a movie and discuss the structure found.
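The Noisy-OR likelihood at the heart of such a model is compact enough to state directly. In this minimal sketch (parameter names and the leak term are generic Noisy-OR conventions, not taken from this paper), the probability that neuron i spikes grows with each active assembly it belongs to:

```python
# Noisy-OR observation model: P(x_i = 1 | z) = 1 - (1 - leak) * prod_k (1 - P_i[k])**z[k]
# Each active latent assembly k independently "tries" to make neuron i spike.

def noisy_or_prob(z, P_i, leak=0.0):
    """Probability that neuron i spikes given binary latent states z.

    z    : list of 0/1 cell-assembly activations
    P_i  : P_i[k] = probability that assembly k alone drives neuron i to spike
    leak : baseline spiking probability (the Noisy-OR leak term)
    """
    silent = 1.0 - leak
    for zk, pk in zip(z, P_i):
        if zk:
            silent *= 1.0 - pk
    return 1.0 - silent

# One active assembly with coupling 0.8 and no leak gives about 0.8;
# activating a second assembly only increases the spiking probability.
print(noisy_or_prob([1, 0], [0.8, 0.5]), noisy_or_prob([1, 1], [0.8, 0.5]))
```

The EM scheme described in the abstract would alternate between inferring the latent states z and adjusting the kernels P under this likelihood.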


Explainable AI: A Neurally-Inspired Decision Stack Framework

Olds, J. L., Khan, M. S., Nayebpour, M., Koizumi, N.

arXiv.org Artificial Intelligence

European law now requires AI to be explainable in the context of adverse decisions affecting European Union (EU) citizens. At the same time, it is expected that there will be increasing instances of AI failure as it operates on imperfect data. This paper puts forward a neurally-inspired framework called decision stacks that can provide a way forward for research aimed at developing explainable AI. Leveraging findings from memory systems in biological brains, the decision stack framework operationalizes the definition of explainability and then proposes a test that can potentially reveal how a given AI arrived at its decision.


How Insight Emerges in a Distributed, Content-addressable Memory

Gabora, Liane, Ranjan, Apara

arXiv.org Artificial Intelligence

We begin this chapter with the bold claim that it provides a neuroscientific explanation of the magic of creativity. Creativity presents a formidable challenge for neuroscience. Neuroscience generally involves studying what happens in the brain when someone engages in a task that involves responding to a stimulus, or retrieving information from memory and using it the right way, or at the right time. If the relevant information is not already encoded in memory, the task generally requires that the individual make systematic use of information that is encoded in memory. But creativity is different. It paradoxically involves studying how someone pulls out of their brain something that was never put into it! Moreover, it must be something both new and useful, or appropriate to the task at hand. The ability to pull out of memory something new and appropriate that was never stored there in the first place is what we refer to as the magic of creativity. Even if we are so fortunate as to determine which areas of the brain are active and how these areas interact during creative thought, we will not have an answer to the question of how the brain comes up with solutions and artworks that are new and appropriate. On the other hand, since the representational capacity of neurons emerges at a level that is higher than that of the individual neurons themselves, the inner workings of neurons are at too low a level to explain the magic of creativity. Thus we look to a level that is midway between gross brain regions and neurons. Since creativity generally involves combining concepts from different domains, or seeing old ideas from new perspectives, we focus our efforts on the neural mechanisms underlying the representation of concepts and ideas. Thus we ask questions about the brain at the level that accounts for its representational capacity, i.e. at the level of distributed aggregates of neurons.


Cell Assemblies in Large Sparse Inhibitory Networks of Biologically Realistic Spiking Neurons

Ponzi, Adam, Wickens, Jeff

Neural Information Processing Systems

Cell assemblies exhibiting episodes of recurrent coherent activity have been observed in several brain regions, including the striatum and hippocampus CA3. Here we address the question of how coherent, dynamically switching assemblies appear in large networks of biologically realistic spiking neurons interacting deterministically. We show by numerical simulations of large asymmetric inhibitory networks with fixed external excitatory drive that if the network has intermediate-to-sparse connectivity, the individual cells are in the vicinity of a bifurcation between a quiescent and a firing state, and the network inhibition varies slowly on the spiking timescale, then cells form assemblies whose members show strong positive correlation, while members of different assemblies show strong negative correlation. We show that cells and assemblies switch between firing and quiescent states with time durations consistent with a power law. Our results are in good qualitative agreement with the experimental studies. The deterministic dynamical behaviour is related to winner-less competition shown in small closed-loop inhibitory networks with heteroclinic cycles connecting saddle points.
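The regime the abstract describes can be caricatured with a toy threshold-linear rate model rather than the paper's full spiking simulation: sparse random inhibitory weights, a fixed excitatory drive, and units that end up split between firing and quiescent states. All parameters and equations here are illustrative assumptions:

```python
import random

# Toy rate model of an asymmetric inhibitory network with constant
# excitatory drive. A simplification of the paper's spiking network:
# sparsity, weights, and dynamics are illustrative only.

random.seed(0)
N = 20          # number of units
SPARSITY = 0.2  # connection probability (intermediate-to-sparse)
DRIVE = 1.0     # fixed external excitatory drive
DT = 0.1

# Asymmetric inhibitory weights: w[i][j] >= 0 is the inhibition of unit i by j.
w = [[random.uniform(0.5, 1.5) if random.random() < SPARSITY and i != j else 0.0
      for j in range(N)] for i in range(N)]

rates = [random.random() for _ in range(N)]

def step(rates):
    """One Euler step of dr_i/dt = -r_i + max(0, DRIVE - sum_j w_ij r_j)."""
    new = []
    for i in range(N):
        inhibition = sum(w[i][j] * rates[j] for j in range(N))
        target = max(0.0, DRIVE - inhibition)   # threshold-linear gain
        new.append(rates[i] + DT * (target - rates[i]))
    return new

for _ in range(500):
    rates = step(rates)

# After relaxation, some units are quiescent while others keep firing:
active = sum(r > 0.1 for r in rates)
print(f"{active} of {N} units active")
```

The paper's point is stronger than this sketch: near the bifurcation, slowly varying inhibition makes the assembly membership itself switch over time rather than settle into one fixed partition.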


Questions Arising from a Proto-Neural Cognitive Architecture

Huyck, Christian Robert (Middlesex University) | Byrne, Emma Louise (Middlesex University)

AAAI Conferences

A neural cognitive architecture would be an architecture based on simulated neurons that provided a set of mechanisms for all cognitive behaviour. Moreover, this would be compatible with biological neural behaviour. As a result, such architectures can both form the basis of a fully-fledged AI and help to explain how cognition emerges from a collection of neurons in the human brain. The development of such a neural cognitive architecture is in its infancy, but a proto-architecture in the form of behaving agents entirely based on simulated neurons is described. These agents take natural language commands, view the environment, plan and act. The development of these agents has led to a series of questions that need to be addressed to advance the development of neural cognitive architectures. These questions include long-standing ones where progress has been made, such as the binding and symbol grounding problems; issues about biological architectures, including neural models and brain topology; issues of emergent behaviour, such as short- and long-term Cell Assembly dynamics; and issues of learning, such as the stability-plasticity dilemma. These questions can act as a road map for the development of neural cognitive architectures and AIs based on them.


A Neurodynamical Approach to Visual Attention

Deco, Gustavo, Zihl, Josef

Neural Information Processing Systems

The psychophysical evidence for "selective attention" originates mainly from visual search experiments. In this work, we formulate a hierarchical system of interconnected modules consisting of populations of neurons for modeling the underlying mechanisms involved in selective visual attention. We demonstrate that our neural system for visual search works across the visual field in parallel but, due to the different intrinsic dynamics, can show the two experimentally observed modes of visual attention, namely the serial and the parallel search mode. In other words, neither an explicit model of a focus of attention nor saliency maps are used. The focus of attention appears as an emergent property of the dynamic behavior of the system. The neural population dynamics are handled in the framework of the mean-field approximation. Consequently, the whole process can be expressed as a system of coupled differential equations.
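The idea that a focus of attention can emerge from coupled population equations, without any explicit saliency map, can be illustrated with the smallest possible case: two mutually inhibiting populations integrated with Euler steps. The equations, gain function, and parameters below are generic competition dynamics chosen for illustration, not the paper's model:

```python
# Two mutually inhibiting neural populations (e.g. responding to two display
# items) under mean-field-style rate equations. The slightly more salient
# item wins the competition, so a "focus" emerges from the dynamics alone.

DT, TAU = 0.01, 1.0
W_INH = 2.0   # mutual inhibition strength

def f(x):
    """Threshold-linear population gain."""
    return max(0.0, x)

I_a, I_b = 1.1, 1.0   # item A is slightly more salient than item B
r_a = r_b = 0.0
for _ in range(5000):  # Euler integration of the coupled ODEs
    r_a += DT / TAU * (-r_a + f(I_a - W_INH * r_b))
    r_b += DT / TAU * (-r_b + f(I_b - W_INH * r_a))

print(round(r_a, 3), round(r_b, 3))  # population A wins the competition
```

In the full hierarchical system the same competition plays out across many populations over the visual field, and the serial versus parallel search modes fall out of how quickly the competition resolves.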


Distributed Synchrony of Spiking Neurons in a Hebbian Cell Assembly

Horn, David, Levy, Nir, Meilijson, Isaac, Ruppin, Eytan

Neural Information Processing Systems

We investigate the behavior of a Hebbian cell assembly of spiking neurons formed via a temporal synaptic learning curve. This learning function is based on recent experimental findings. It includes potentiation for short time delays between pre- and post-synaptic neuronal spiking, and depression for spiking events occurring in the reverse order. The coupling between the dynamics of the synaptic learning and of the neuronal activation leads to interesting results. We find that the cell assembly can fire asynchronously, but may also function in complete synchrony, or in distributed synchrony.
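A temporal learning curve of the kind described, potentiation for short positive delays (pre before post) and depression for the reverse order, is commonly written as a pair of exponentials. The amplitudes and time constants below are illustrative assumptions, not the paper's fitted values:

```python
import math

# STDP-style temporal learning window: the sign of the synaptic change
# depends on the order of pre- and post-synaptic spikes, and its magnitude
# decays exponentially with the delay. Parameters are illustrative.

A_PLUS, A_MINUS = 1.0, 0.5
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # ms

def weight_change(dt_ms: float) -> float:
    """Synaptic change for spike-time difference dt = t_post - t_pre."""
    if dt_ms > 0:    # pre fired before post: potentiation
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    elif dt_ms < 0:  # post fired before pre: depression
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0

print(weight_change(10.0) > 0, weight_change(-10.0) < 0)  # True True
```

Coupling a window like this to the neurons' own spike times is what produces the feedback between learning and activation dynamics that the abstract highlights.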

